29 research outputs found

    A Taxonomy of Explainable Bayesian Networks

    Artificial Intelligence (AI), and in particular the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems in situations where only the outcome is of interest, we do pay close attention when these systems are applied in areas where the decisions directly influence the lives of humans. In particular, noisy and uncertain observations close to the decision boundary result in predictions that cannot readily be explained, which may foster mistrust among end-users. This has drawn attention to AI methods whose outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is, however, mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from the explainability methods are illustrated by means of a simple medical diagnostic scenario. The taxonomy introduced in this paper has the potential not only to encourage end-users to efficiently communicate outcomes obtained, but also to support their understanding of how and, more importantly, why certain predictions were made.
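
    As a rough illustration of the kind of simple medical diagnostic scenario the abstract mentions, the sketch below builds a two-node Bayesian network and queries it for a posterior. The variables, the numbers and the use of the pgmpy library are illustrative assumptions, not the network from the paper.

```python
# Illustrative two-node diagnostic BN (hypothetical variables and numbers,
# not the network from the paper); requires pgmpy.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Disease", "TestResult")])

# Prior over the disease (0 = absent, 1 = present) and the test's reliability.
cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])
cpd_test = TabularCPD("TestResult", 2,
                      [[0.95, 0.10],   # P(TestResult = negative | Disease)
                       [0.05, 0.90]],  # P(TestResult = positive | Disease)
                      evidence=["Disease"], evidence_card=[2])
model.add_cpds(cpd_disease, cpd_test)
model.check_model()

# Posterior P(Disease | TestResult = positive): the kind of outcome an
# explanation of evidence or of a decision would be attached to.
posterior = VariableElimination(model).query(["Disease"], evidence={"TestResult": 1})
print(posterior)
```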

    Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales

    Background: In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale. Methods: General practitioners (GPs) gave assessments on and preferences for three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed. Results: No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, with the groups in between having a preference for the combined verbal-numerical scale. Conclusion: We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals.
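
    To make the idea of a combined verbal-numerical scale concrete, the fragment below pairs verbal anchors with numerical values. The specific labels and numbers are purely illustrative assumptions, not the scale designed and evaluated in the study.

```python
# Purely illustrative verbal-numerical anchors for probability elicitation;
# the labels and values are hypothetical, not the scale from the study.
VERBAL_NUMERICAL_SCALE = {
    "(almost) impossible": 0.0,
    "improbable": 0.15,
    "uncertain": 0.35,
    "fifty-fifty": 0.50,
    "probable": 0.85,
    "(almost) certain": 1.0,
}

def elicit(label: str) -> float:
    """Map an expert's verbal assessment onto a numerical probability."""
    return VERBAL_NUMERICAL_SCALE[label]

print(elicit("probable"))  # 0.85
```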

    A randomised controlled trial of a brief online mindfulness-based intervention on paranoia in a non-clinical sample

    Paranoia is common and distressing in the general population and can impact on health, emotional well-being and social functioning, such that effective interventions are needed. Brief online mindfulness-based interventions (MBIs) have been shown to reduce symptoms of anxiety and depression in non-clinical samples; however, at present there is no research investigating whether they can reduce paranoia. The current study explored whether a brief online MBI increased levels of mindfulness and reduced levels of paranoia in a non-clinical population. The mediating effect of mindfulness on any changes in paranoia was also investigated. One hundred and ten participants were randomly allocated to either a two-week online MBI including 10 minutes of daily guided mindfulness practice or to a waitlist control condition. Measures of mindfulness and paranoia were administered at baseline, post-intervention and one-week follow-up. Participants in the MBI group displayed significantly greater reductions in paranoia compared to the waitlist control group. Mediation analysis demonstrated that change in mindfulness skills (specifically the observe, describe and nonreact facets of the FFMQ) mediated the relationship between intervention type and change in levels of paranoia. This study provides evidence that a brief online MBI can significantly reduce levels of paranoia in a non-clinical population. Furthermore, increases in mindfulness skills from this brief online MBI can mediate reductions in non-clinical paranoia. The limitations of the study are discussed.

    Clinical evidence framework for Bayesian networks

    There is poor uptake of prognostic decision support models by clinicians, regardless of their accuracy. There is evidence that this results from doubts about the basis of the model, as the evidence behind clinical models is often not clear to anyone other than their developers. In this paper, we propose a framework for representing the evidence base of a Bayesian network (BN) decision support model. The aim of this evidence framework is to be able to present all the clinical evidence alongside the BN itself. The evidence framework is capable of presenting supporting and conflicting evidence, and evidence associated with relevant but excluded factors. It also allows the completeness of the evidence to be queried. We illustrate this framework using a BN that has been previously developed to predict acute traumatic coagulopathy, a potentially fatal disorder of blood clotting, at early stages of trauma care.
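
    One way to read such a framework is as structured evidence annotations attached to the elements of a BN, which can then be queried for completeness. The sketch below is only a plausible rendering under that reading; the class names and fields are assumptions, not the paper's actual representation.

```python
# Speculative sketch of evidence annotations for a BN decision-support model;
# class and field names are assumptions, not the framework from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    citation: str
    supports: bool                 # True = supporting, False = conflicting

@dataclass
class ModelElement:
    name: str                      # a BN node or arc, e.g. "Trauma -> Coagulopathy"
    included: bool                 # relevant factors may be modelled or excluded
    evidence: List[Evidence] = field(default_factory=list)

def completeness(elements: List[ModelElement]) -> float:
    """Fraction of modelled elements backed by at least one piece of evidence."""
    modelled = [e for e in elements if e.included]
    backed = [e for e in modelled if e.evidence]
    return len(backed) / len(modelled) if modelled else 0.0
```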

    A simulation model shows how individual differences affect major life decisions


    Runtime Norm Revision Using Bayesian Networks

    To guarantee the overall intended objectives of a multiagent system, the behavior of individual agents should be controlled and coordinated. Such coordination can be achieved, without limiting the agents’ autonomy, via runtime norm enforcement. However, due to the dynamicity and uncertainty of the environment, the enforced norms can be ineffective. In this paper, we propose a runtime supervision mechanism that automatically revises norms when their enforcement appears to be ineffective. The decision to revise norms is taken based on a Bayesian network that gives information about the likelihood of achieving the overall intended system objectives by enforcing the norms. Norms can be revised in three ways: relaxation, strengthening, and alteration. We evaluate the supervision mechanism on an urban smart traffic simulation.
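
    A minimal sketch of the revision decision described above is given below, assuming a hypothetical helper that queries the Bayesian network for the probability of achieving the system objectives under the currently enforced norm; the threshold and the way a revision is selected are illustrative assumptions.

```python
# Sketch of BN-driven runtime norm revision; `query_objective_probability`,
# the 0.8 threshold and the scoring of candidate revisions are assumptions.
from typing import Callable, Dict, Optional

REVISIONS = ("relaxation", "strengthening", "alteration")

def revise_norm(norm: str,
                query_objective_probability: Callable[[str], float],
                candidate_scores: Dict[str, float],
                threshold: float = 0.8) -> Optional[str]:
    """Return a revision kind if enforcing `norm` looks ineffective, else None."""
    p_objective = query_objective_probability(norm)
    if p_objective >= threshold:
        return None  # the norm still appears effective; keep enforcing it
    # Otherwise pick the candidate revision with the best estimated effect.
    return max(REVISIONS, key=lambda r: candidate_scores.get(r, 0.0))
```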

    Qualitative probabilistic relational models

    Probabilistic relational models (PRMs) were introduced to extend the modelling and reasoning capacities of Bayesian networks from propositional to relational domains. PRMs are typically learned from relational data, by extracting from these data both a dependency structure and its numerical parameters. For this purpose, a large and rich data set is required, which proves prohibitive for many real-world applications. Since a PRM's structure can often be readily elicited from domain experts, we propose to construct PRMs manually, by an approach that combines qualitative concepts adapted from qualitative probabilistic networks (QPNs) with stepwise quantification. To this end, we introduce qualitative probabilistic relational models (QPRMs) and tailor an existing algorithm for qualitative probabilistic inference to these new models.
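
    The qualitative concepts referred to are the sign calculus of qualitative probabilistic networks; as a reminder of what that calculus looks like, the sketch below implements the standard sign product and sign addition operators used during sign propagation (the QPRM-specific inference algorithm itself is not reproduced here).

```python
# Standard QPN sign operators used during qualitative inference;
# signs are '+', '-', '0' or '?' (ambiguous).

def sign_product(a: str, b: str) -> str:
    """Combine signs along a chain of qualitative influences."""
    if a == "0" or b == "0":
        return "0"
    if a == "?" or b == "?":
        return "?"
    return "+" if a == b else "-"

def sign_add(a: str, b: str) -> str:
    """Combine signs arriving at a node over parallel chains."""
    if a == "0":
        return b
    if b == "0":
        return a
    if a == "?" or b == "?" or a != b:
        return "?"
    return a  # both '+' or both '-'
```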

    Constructing Bayesian network graphs from labeled arguments

    Bayesian networks (BNs) are powerful tools that are well-suited for reasoning about the uncertain consequences that can be inferred from evidence. Domain experts, however, typically do not have the expertise to construct BNs and instead resort to using other tools such as argument diagrams and mind maps. Recently, a structured approach was proposed to construct a BN graph from arguments annotated with causality information. As argumentative inferences may not be causal, we generalize this approach in this paper to include other types of inferences. Moreover, we prove a number of formal properties of the generalized approach and identify assumptions under which the construction of an initial BN graph can be fully automated.
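
    As a rough sketch of the kind of construction step involved, the code below turns labeled argument inferences into directed edges of a BN graph, orienting causal inferences from premise to conclusion and evidential inferences the other way; this orientation rule and the data shapes are assumptions for illustration, not the paper's actual procedure.

```python
# Illustrative construction of a BN graph skeleton from labeled arguments;
# the orientation rule (causal: premise -> conclusion, evidential:
# conclusion -> premise) is an assumption made for this sketch.
from typing import List, Set, Tuple

Argument = Tuple[str, str, str]   # (premise, conclusion, inference label)

def build_bn_edges(arguments: List[Argument]) -> Set[Tuple[str, str]]:
    edges: Set[Tuple[str, str]] = set()
    for premise, conclusion, label in arguments:
        if label == "causal":
            edges.add((premise, conclusion))   # arc follows the causal direction
        else:                                  # e.g. an evidential inference
            edges.add((conclusion, premise))   # arc points back towards the cause
    return edges

# A causal and an evidential reading of the same relation yield the same arc.
print(build_bn_edges([("fire", "smoke", "causal"),
                      ("smoke", "fire", "evidential")]))  # {('fire', 'smoke')}
```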